We consider distributed learning in the presence of slow and unresponsive worker nodes, referred to as stragglers. To mitigate the effect of stragglers, gradient coding redundantly assigns partial computations to the workers such that the overall result can be recovered from the non-straggling workers alone. Gradient codes are designed to tolerate a fixed number of stragglers. Since the number of stragglers in practice is random and unknown a priori, tolerating a fixed number of stragglers can yield a sub-optimal computation load and can result in higher latency. We propose a gradient coding scheme that can tolerate a flexible number of stragglers by carefully concatenating gradient codes for different straggler tolerances. By proper task scheduling and small additional signaling, our scheme adapts the computation load of the workers to the actual number of stragglers. We analyze the latency of our proposed scheme and show that it has a significantly lower latency than gradient codes.
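The redundant-assignment idea behind gradient coding can be illustrated with a minimal sketch. The code below shows only the coverage property of a cyclic assignment (each of $n$ workers holds $s+1$ consecutive partitions, so any $s$ stragglers can be tolerated); the linear coding used to recover the gradient *sum*, and all function names, are simplifications, not the paper's scheme.

```python
from itertools import combinations

# Simplified illustration of straggler-tolerant task assignment:
# each of n workers holds s+1 consecutive data partitions
# (cyclically), so every partition is replicated s+1 times and
# survives the loss of any s workers.

def cyclic_assignment(n, s):
    """Worker i holds partitions {i, i+1, ..., i+s} mod n."""
    return [{(i + j) % n for j in range(s + 1)} for i in range(n)]

def recoverable(assignment, stragglers):
    """True if every partition is held by some non-straggling worker."""
    n = len(assignment)
    alive = [w for w in range(n) if w not in stragglers]
    return all(any(p in assignment[w] for w in alive) for p in range(n))

n, s = 6, 2
A = cyclic_assignment(n, s)
# Any set of at most s stragglers leaves all partitions covered:
assert all(recoverable(A, set(c)) for c in combinations(range(n), s))
# With s+1 stragglers coverage can fail: workers 0, 1, 2 are the
# only holders of partition 2.
assert not recoverable(A, {0, 1, 2})
```

Designing the code so that the load (here $s+1$ partitions per worker) matches the *actual* rather than worst-case straggler count is exactly the flexibility the proposed concatenated scheme targets.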
Over-the-air computation has the potential to increase the communication-efficiency of data-dependent distributed wireless systems, but is vulnerable to eavesdropping. We consider over-the-air computation over block-fading additive white Gaussian noise channels in the presence of a passive eavesdropper. The goal is to design a secure over-the-air computation scheme. We propose a scheme that achieves MSE-security against the eavesdropper by employing zero-forced artificial noise, while keeping the distortion at the legitimate receiver small. In contrast to former approaches, the security does not depend on external helper nodes to jam the eavesdropper's receive signal. We thoroughly design the system parameters of the scheme, propose an artificial noise design that harnesses unused transmit power for security, and give an explicit construction rule. Our design approach is applicable both if the eavesdropper's channel coefficients are known and if they are unknown in the signal design. Simulations demonstrate the performance, and show that our noise design outperforms other methods.
We consider the distributed SGD problem, where a main node distributes gradient computations among $n$ workers. By assigning tasks to all workers and waiting only for the $k$ fastest ones, the main node can trade off the algorithm's error against its runtime by gradually increasing $k$ as the algorithm evolves. However, this strategy, known as adaptive $k$-sync, neglects the cost of unused computation and of communicating models to workers that reveal a straggling behavior. We propose a cost-efficient scheme that assigns tasks to only $k$ workers and gradually increases $k$. We introduce the use of a combinatorial multi-armed bandit model to learn which workers are fastest while assigning gradient computations. Assuming workers with exponentially distributed response times parameterized by different means, we provide empirical and theoretical guarantees on the regret of our strategy, i.e., the extra time spent learning the workers' average response times. Furthermore, we propose and analyze a strategy that is applicable to a large class of response-time distributions. Compared to adaptive $k$-sync, our scheme achieves a significantly lower error for the same computational effort and with smaller downlink communication, at the expense of a lower speed.
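The learning problem the bandit model addresses can be sketched in a toy simulation: observe response times of exponentially distributed workers during assignment, then commit to the $k$ with the smallest empirical means. The round-robin explore-then-commit rule and all names below are illustrative stand-ins, not the paper's combinatorial bandit policy.

```python
import random

# Toy simulation of learning the k fastest of n workers from observed
# exponential response times (illustrative, not the paper's policy).

random.seed(0)
n, k = 10, 3
rates = [1.0] * 7 + [10.0] * 3      # workers 7, 8, 9 are 10x faster

def response(i):
    """Sample a response time for worker i ~ Exp(rates[i])."""
    return random.expovariate(rates[i])

totals, counts = [0.0] * n, [0] * n
for t in range(100):                 # exploration: round-robin over workers
    for i in [(t * k + j) % n for j in range(k)]:
        totals[i] += response(i)
        counts[i] += 1

means = [totals[i] / counts[i] for i in range(n)]
fastest = sorted(range(n), key=lambda i: means[i])[:k]  # commit to these
```

After exploration, `fastest` identifies the three genuinely fast workers; the regret analyzed in the paper corresponds to the extra time spent before this identification is reliable.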
A well-performing prediction model is vital for a recommendation system suggesting actions for energy-efficient consumer behavior. However, reliable and accurate predictions depend on informative features and a suitable model design to perform well and robustly across different households and appliances. Moreover, customers' unjustifiably high expectations of accurate predictions may discourage them from using the system in the long term. In this paper, we design a three-step forecasting framework that assesses predictability, engineers features, and evaluates deep learning architectures to forecast 24 hourly load values. First, our predictability analysis provides a tool for expectation management to cushion customers' anticipations. Second, we design several new weather-, time- and appliance-related parameters for the modeling procedure and test their contribution to the model's prediction performance. Third, we examine six deep learning techniques and compare them to tree- and support vector regression benchmarks. We develop a robust and accurate model for appliance-level load prediction based on four datasets from four different regions (US, UK, Austria, and Canada) with an equal set of appliances. The empirical results show that cyclical encoding of time features and weather indicators alongside a long short-term memory (LSTM) model offer the best performance.
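Cyclical encoding of time features, reported above as one of the beneficial ingredients, maps a periodic quantity onto a sine/cosine pair so that the period boundary is not a discontinuity. A minimal sketch for the hour of day (the function name is illustrative):

```python
import math

# Cyclical (sine/cosine) encoding of the hour of day: hours 23 and 0
# become neighbors on the unit circle instead of far-apart integers.

def encode_hour(h):
    angle = 2 * math.pi * h / 24
    return math.sin(angle), math.cos(angle)

s0, c0 = encode_hour(0)        # (0.0, 1.0)
s23, c23 = encode_hour(23)
s12, c12 = encode_hour(12)

# Hour 23 is close to hour 0, while hour 12 is maximally far:
near = math.hypot(s23 - s0, c23 - c0)   # ~0.26
far = math.hypot(s12 - s0, c12 - c0)    # 2.0
```

The same construction applies to day-of-week or month-of-year features by replacing the period 24 accordingly.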
Current state-of-the-art deep neural networks for image classification are made up of 10-100 million learnable weights and are therefore inherently prone to overfitting. The complexity of the weight count can be seen as a function of the number of channels, the spatial extent of the input, and the number of layers of the network. Due to the use of convolutional layers, the scaling of weight complexity is usually linear with respect to the resolution dimensions, but remains quadratic with respect to the number of channels. Active research in recent years on using multigrid-inspired ideas in deep neural networks has shown that, on the one hand, a significant number of weights can be saved by appropriate weight sharing and, on the other, that a hierarchical structure in the channel dimension can improve the weight complexity to linear. In this work, we combine these multigrid ideas to introduce a joint framework of multigrid-inspired architectures that exploit multigrid structures in all relevant dimensions to achieve linear weight complexity scaling and drastically reduced weight counts. Our experiments show that this structured reduction in weight count is able to reduce overfitting and thus shows improved performance over state-of-the-art ResNet architectures on typical image classification benchmarks at lower network complexity.
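The quadratic scaling in the channel dimension mentioned above follows directly from the parameter count of a standard convolution, which a one-line calculation makes concrete (bias terms omitted for clarity):

```python
# Parameter count of a standard k x k convolution: the weights are
# shared across the H x W grid (no spatial dependence), but every
# input channel connects to every output channel, so the count is
# quadratic in the channel width C when c_in = c_out = C.

def conv_params(c_in, c_out, k=3):
    return c_in * c_out * k * k   # bias omitted

# Doubling the channel width quadruples the weights:
assert conv_params(256, 256) == 4 * conv_params(128, 128)
```

A hierarchical (multigrid-like) structure in the channel dimension replaces the dense all-channels-to-all-channels coupling, which is what brings this quadratic term down to linear.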
Named Entity Recognition and Intent Classification are among the most important subfields of Natural Language Processing. Recent research has led to the development of faster, more sophisticated and more efficient models to tackle the problems posed by those two tasks. In this work we explore the effectiveness of two separate families of deep learning networks for those tasks: Bidirectional Long Short-Term Memory networks and Transformer-based networks. The models were trained and tested on the ATIS benchmark dataset for both the English and Greek languages. The purpose of this paper is to present a comparative study of the two groups of networks for both languages and showcase the results of our experiments. The models, representing the current state of the art, yielded impressive results and achieved high performance.
We study the prices of European Emission Allowances (EUA), analyzing their uncertainty and their dependence on related energy markets. We propose a probabilistic multivariate conditional time series model that exploits key characteristics of the data. The forecasting performance of the proposed model and of various competing models is evaluated in an extensive rolling-window forecasting study covering almost two years out-of-sample. Thereby, we forecast 30 steps ahead. The accuracy of the multivariate probabilistic forecasts is assessed by the energy score. In light of the Russian invasion of Ukraine, we also discuss our findings with a focus on volatility spillovers and time-varying correlations.
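The energy score used to assess the multivariate probabilistic forecasts is $\mathrm{ES}(F, y) = \mathbb{E}\|X - y\| - \tfrac{1}{2}\mathbb{E}\|X - X'\|$ with $X, X' \sim F$ i.i.d., and is estimated in practice from forecast samples. A minimal sketch of the standard sample-based estimator (variable names are illustrative):

```python
import math

# Monte Carlo estimator of the energy score for multivariate
# probabilistic forecasts:
#   ES(F, y) = E||X - y|| - 0.5 * E||X - X'||,  X, X' ~ F i.i.d.

def norm(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def energy_score(samples, y):
    """samples: list of forecast draws (vectors); y: realized vector."""
    m = len(samples)
    term1 = sum(norm(x, y) for x in samples) / m
    term2 = sum(norm(xi, xj) for xi in samples for xj in samples) / (2 * m * m)
    return term1 - term2

# A forecast concentrated exactly on the outcome scores 0 (lower is better):
assert energy_score([[1.0, 2.0]] * 5, [1.0, 2.0]) == 0.0
```

Lower scores are better, and the score is proper: it rewards forecasts that are both calibrated and sharp across all 30 forecast steps jointly.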
Although contemporary large language models (LMs) demonstrate impressive question-answering capabilities, their answers are typically the product of a single call to the model. This entails an unwelcome degree of opacity and compromises performance, especially on problems that are inherently multi-step. To address these limitations, we show how LMs can be made to perform faithful multi-step reasoning via a process whose causal structure mirrors the underlying logical structure of the problem. Our approach works by chaining together reasoning steps, where each step results from calls to two fine-tuned LMs, one for selection and one for inference, to produce a valid reasoning trace. Our method carries out a beam search through the space of reasoning traces to improve reasoning quality. We demonstrate the effectiveness of our model on multi-step logical deduction and scientific question answering, showing that it outperforms baselines on final answer accuracy and generates humanly interpretable reasoning traces whose validity can be checked by the user.
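The control loop of a beam search over reasoning traces can be sketched as follows. The two fine-tuned LMs (selection and inference) are replaced here by stub functions, and the trace scoring is made up; this is only the shape of the search, not the paper's system.

```python
import heapq

# Skeleton of beam search over reasoning traces. propose_steps stands
# in for the selection + inference LMs; score stands in for the
# trace-quality estimate. Both are illustrative stubs.

def propose_steps(trace):
    """Extend a trace by each of 3 candidate next steps (stub)."""
    return [trace + [f"step{len(trace)}-{i}"] for i in range(3)]

def score(trace):
    """Mock quality score: here, prefer '-0' steps (stub)."""
    return -len(trace) + sum(t.endswith("-0") for t in trace)

def beam_search(beam_width=2, depth=3):
    beam = [[]]                      # start from the empty trace
    for _ in range(depth):
        candidates = [t for trace in beam for t in propose_steps(trace)]
        beam = heapq.nlargest(beam_width, candidates, key=score)
    return beam[0]                   # highest-scoring trace

best = beam_search()
```

In the actual method, each `propose_steps` call invokes the selection LM to pick relevant statements and the inference LM to derive a new one, so the surviving beam entries are causally valid traces rather than post-hoc rationalizations.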
Abstract reasoning is a key ability for an intelligent system. Large language models achieve high performance on abstract reasoning tasks, but exhibit many imperfections. However, human abstract reasoning is also imperfect, and depends on our knowledge of and beliefs about the content of the reasoning problem. For example, humans reason much more reliably about logical rules grounded in everyday situations than about arbitrary rules concerning abstract attributes. The training experience of language models similarly endows them with prior expectations that reflect human knowledge and beliefs. We therefore hypothesized that language models would show human-like content effects on abstract reasoning problems. We explored this hypothesis across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task (Wason, 1968). We find that state-of-the-art large language models (with 7 or 70 billion parameters; Hoffmann et al., 2022) reflect many of the same patterns observed in humans across these tasks; like humans, the models reason more effectively about believable situations than about unrealistic or abstract ones. Our findings have implications for understanding both these cognitive effects and the factors that contribute to language model performance.
In this paper, we present Tac2Pose, an object-specific approach for tactile pose estimation of known objects from the first touch. Given the object geometry, we learn a tailored perception model in simulation that estimates a probability distribution over likely object poses given a tactile observation. To do so, we simulate the contact shapes that a dense set of object poses would produce on the sensor. Then, given a new contact shape obtained from the sensor, we match it against the precomputed set using an object-specific embedding learned with contrastive learning. We obtain contact shapes from the sensor with an object-agnostic calibration step that maps RGB tactile observations to binary contact shapes. This mapping, which can be reused across objects and sensor instances, is the only step trained with real sensor data. The result is a perception model that localizes the object from the first real tactile observation. Importantly, it produces pose distributions and can incorporate additional pose constraints from other perception systems, contacts, or priors. We provide quantitative results for 20 objects. Tac2Pose provides high-accuracy pose estimation from distinctive tactile observations while regressing meaningful pose distributions to account for contact shapes that could be produced by different object poses. We also test Tac2Pose on object models reconstructed from a 3D scanner to evaluate robustness to uncertainty in the object model. Finally, we demonstrate the advantages of Tac2Pose compared with three baseline methods for tactile pose estimation: directly regressing the object pose with a neural network, matching the observed contact against a set of possible contacts using a standard classification neural network, and direct pixel comparison of the observed contact with the set of possible contacts. Website: http://mcube.mit.edu/research/tac2pose.html
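The matching step, comparing a new contact-shape embedding against the precomputed per-pose embeddings and turning the similarities into a distribution over poses, can be sketched as a softmax over cosine similarities. The embeddings, temperature, and function names below are made up for illustration; the actual embeddings come from the contrastively trained network.

```python
import math

# Toy version of matching a query embedding against an embedding bank
# of simulated contact shapes (one per candidate pose) and converting
# similarities into a probability distribution over poses.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pose_distribution(query, pose_embeddings, temperature=0.1):
    """Softmax over cosine similarities to each candidate pose."""
    sims = [cosine(query, e) for e in pose_embeddings]
    exps = [math.exp(s / temperature) for s in sims]
    z = sum(exps)
    return [e / z for e in exps]

bank = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]  # 3 candidate-pose embeddings
probs = pose_distribution([0.9, 0.1], bank)  # most mass on pose 0
```

Because the output is a full distribution rather than a single pose, ambiguous contacts (shapes producible by several poses) keep mass on multiple candidates, which is what allows fusing in constraints from other sensors or priors.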